457 research outputs found

    Warkany Syndrome: A Rare Case Report

    Warkany syndrome 2, or trisomy 8 mosaicism (T8M), is a well-described but very rare chromosomal abnormality. The phenotype is extremely variable, ranging from a normal individual to a severe malformation syndrome, and because of this variability the condition often goes undiagnosed. We report trisomy 8 mosaicism in a 3-year-old boy evaluated for facial dysmorphism and delayed development.

    Post-quantum cryptographic hardware primitives

    The development and implementation of post-quantum cryptosystems have become a pressing issue in the design of secure computing systems, as general quantum computers have become more feasible in the last two years. In this work, we introduce a set of hardware post-quantum cryptographic primitives (PCPs) consisting of four frequently used security components: a public-key cryptosystem (PKC), key exchange (KEX), oblivious transfer (OT), and zero-knowledge proof (ZKP). In addition, we design a high-speed polynomial multiplier to accelerate these primitives. These primitives will aid researchers and designers in constructing quantum-proof secure computing systems in the post-quantum era.
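
    The abstract does not give implementation details, but the core operation such a polynomial multiplier accelerates in lattice-based PKC and KEX schemes is multiplication in the ring Z_q[x]/(x^n + 1). The sketch below is a minimal software illustration of that operation (schoolbook method in Python); it is purely illustrative and not the paper's hardware design, and the function name, modulus, and example values are our own choices.

        def negacyclic_mul(a, b, q):
            """Multiply two n-coefficient polynomials modulo x^n + 1 and modulo q."""
            n = len(a)
            assert len(b) == n
            res = [0] * n
            for i in range(n):
                for j in range(n):
                    k = i + j
                    if k < n:
                        res[k] = (res[k] + a[i] * b[j]) % q
                    else:
                        # x^n = -1 in this ring, so overflow terms wrap with a sign flip
                        res[k - n] = (res[k - n] - a[i] * b[j]) % q
            return res

        # (1 + x)^2 = 1 + 2x + x^2 in Z_17[x]/(x^4 + 1)
        print(negacyclic_mul([1, 1, 0, 0], [1, 1, 0, 0], 17))  # -> [1, 2, 1, 0]

    A hardware multiplier replaces these nested loops with parallel multiply-accumulate units or an NTT-based datapath; the quadratic loop above is shown only to make the arithmetic explicit.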

    A Lightweight McEliece Cryptosystem Co-processor Design

    Due to the rapid advances in the development of quantum computers and their susceptibility to errors, there is a renewed interest in error correction algorithms. In particular, error-correcting code-based cryptosystems have reemerged as a highly desirable coding technique. This is because most classical asymmetric cryptosystems will fail in the quantum computing era: quantum computers can solve many of the integer factorization and discrete logarithm problems efficiently. However, code-based cryptosystems remain secure against quantum computers, since the decoding of general linear codes remains NP-hard even on these computing systems. One such cryptosystem is the McEliece code-based cryptosystem. The original McEliece cryptosystem uses a binary Goppa code, which is known for its good code rate and error correction capability; however, its key generation and decoding procedures have high computational complexity. In this work we propose a design and hardware implementation of a public-key encryption and decryption co-processor based on a new variant of the McEliece system. This co-processor takes advantage of non-binary Orthogonal Latin Square Codes to achieve much lower computational complexity, hardware cost, and key size. Comment: 2019 Boston Area Architecture Workshop (BARC'19).
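
    As background for the encryption step such a co-processor implements, the code-based principle can be sketched directly: the ciphertext is a codeword of the public code with t deliberate errors added, and only the holder of the hidden code structure can strip them off. The toy example below (Python with NumPy, a Hamming(7,4) code over GF(2) with t = 1) shows only this principle; it omits the scrambling and permutation matrices and the decryption side, and it is not the paper's non-binary Orthogonal Latin Square variant.

        import numpy as np

        # Toy public generator matrix: Hamming(7,4), which can correct t = 1 error.
        G = np.array([[1, 0, 0, 0, 1, 1, 0],
                      [0, 1, 0, 0, 1, 0, 1],
                      [0, 0, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]], dtype=int)
        t = 1  # number of intentional errors the legitimate decoder can remove

        def encrypt(message_bits, rng):
            """McEliece-style encryption over GF(2): c = m*G + e with a weight-t error e."""
            m = np.array(message_bits, dtype=int)
            codeword = m @ G % 2                          # encode the 4-bit message
            e = np.zeros(G.shape[1], dtype=int)
            e[rng.choice(G.shape[1], size=t, replace=False)] = 1
            return (codeword + e) % 2                     # add the secret error pattern

        rng = np.random.default_rng(0)
        print(encrypt([1, 0, 1, 1], rng))                 # 7-bit ciphertext with one flipped bit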

    Enhancing Image Quality: A Comparative Study of Spatial, Frequency Domain, and Deep Learning Methods

    Image restoration and noise reduction methods have been developed to restore degraded images and improve their quality. These methods have gained substantial significance in recent years, mainly due to the growing use of digital imaging across diverse domains, including medical imaging, surveillance, and satellite imaging. In this paper, we conduct a comparative analysis of three distinct approaches to image restoration: the spatial method, the frequency domain method, and the deep learning method. The study was conducted on a dataset of 10,000 images, and the performance of each method was evaluated using accuracy and loss metrics. The results show that the deep learning method outperformed the other two, achieving a validation accuracy of 72.68% after 10 epochs. The spatial method achieved a validation accuracy of 69.98% after 10 epochs, while the FFT frequency domain method, at 52.87% after 10 epochs, was significantly lower than the other two. The study demonstrates that deep learning is a promising approach for image classification tasks and outperforms traditional methods such as spatial and frequency domain techniques.
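
    For reference, the frequency-domain method in a comparison like this typically amounts to filtering the image's 2-D FFT. The sketch below (Python with NumPy, an ideal low-pass filter) is only an illustration of that idea; the cutoff radius and the synthetic test image are our own choices, not details from the paper.

        import numpy as np

        def fft_lowpass(image, cutoff):
            """Suppress noise by zeroing FFT coefficients farther than `cutoff` from the center."""
            f = np.fft.fftshift(np.fft.fft2(image))
            rows, cols = image.shape
            y, x = np.ogrid[:rows, :cols]
            dist = np.sqrt((y - rows / 2) ** 2 + (x - cols / 2) ** 2)
            f[dist > cutoff] = 0                          # ideal low-pass mask
            return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

        rng = np.random.default_rng(0)
        clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))      # smooth synthetic image
        noisy = clean + 0.2 * rng.standard_normal(clean.shape)
        restored = fft_lowpass(noisy, cutoff=8)
        print(np.mean((noisy - clean) ** 2), np.mean((restored - clean) ** 2))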

    A Comprehensive Review of Image Restoration and Noise Reduction Techniques

    Images play a crucial role in modern life and find applications in diverse fields, ranging from preserving memories to conducting scientific research. However, images often suffer from various forms of degradation such as blur, noise, and contrast loss. These degradations make images difficult to interpret, reduce their visual quality, and limit their practical applications. To overcome these challenges, image restoration and noise reduction techniques have been developed to recover degraded images and enhance their quality. These techniques have gained significant importance in recent years, especially with the increasing use of digital imaging in fields such as medical imaging, surveillance, and satellite imaging. This paper presents a comprehensive review of image restoration and noise reduction techniques, encompassing spatial and frequency domain methods and deep learning-based techniques. The paper also discusses the evaluation metrics used to assess the effectiveness of these techniques and explores future research directions in this field. The primary objective of this paper is to offer a thorough understanding of the concepts and methods involved in image restoration and noise reduction.
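
    The review discusses evaluation metrics in general terms; two of the most common ones for restoration quality, MSE and PSNR, can be computed as in the short sketch below (Python with NumPy; the image values are illustrative, not from the paper).

        import numpy as np

        def mse(reference, restored):
            """Mean squared error between a reference image and its restored version."""
            diff = reference.astype(float) - restored.astype(float)
            return float(np.mean(diff ** 2))

        def psnr(reference, restored, max_value=255.0):
            """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
            err = mse(reference, restored)
            return float("inf") if err == 0 else 10.0 * np.log10(max_value ** 2 / err)

        ref = np.full((8, 8), 128, dtype=np.uint8)
        degraded = ref.copy()
        degraded[0, 0] = 138                              # perturb a single pixel
        print(psnr(ref, degraded))                        # roughly 46 dB for this small change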

    Análisis predictivo del cáncer de mama utilizando técnicas de aprendizaje automático

    This paper is a product of the research project “Predictive Analysis of Breast Cancer Using Machine Learning Techniques”, performed at the Manav Rachna International Institute of Research and Studies, Faridabad, in 2018. Introduction: This article is part of an effort to predict breast cancer, which is a serious concern for women’s health. Problem: Breast cancer is the most common type of cancer and has always been a threat to women’s lives. Early diagnosis requires an effective method of predicting cancer that allows physicians to distinguish benign from malignant cases. Researchers and scientists have been trying hard to find innovative methods to predict cancer. Objective: The objective of this paper is the predictive analysis of breast cancer using various machine learning techniques, namely the Naïve Bayes method, Linear Discriminant Analysis, K-Nearest Neighbors, and the Support Vector Machine method. Methodology: Predictive data mining has become an instrument for scientists and researchers in the medical field. Predicting breast cancer at an early stage supports better treatment. KDD (Knowledge Discovery in Databases) is one of the most popular data mining methods used by medical researchers to identify patterns and relationships between variables, and it also helps in predicting the outcome of the disease from historical data. Results: To select the best model for cancer prediction, the accuracy of all models is estimated and the best model is selected. Conclusion: This work seeks to identify the technique with the highest accuracy for breast cancer prediction. Originality: This research was performed using R with a dataset taken from the UCI machine learning repository. Limitations: The lack of exact information provided by the data.
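
    A sketch of the kind of model comparison the abstract describes is given below. The original study was carried out in R on the UCI dataset; this Python analogue substitutes scikit-learn’s bundled Wisconsin breast cancer data and 10-fold cross-validation, so the classifier choices mirror the paper but the printed numbers are not its results.

        from sklearn.datasets import load_breast_cancer
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        X, y = load_breast_cancer(return_X_y=True)        # stand-in for the UCI download

        models = {
            "Naive Bayes": GaussianNB(),
            "LDA": LinearDiscriminantAnalysis(),
            "KNN": KNeighborsClassifier(n_neighbors=5),
            "SVM": SVC(kernel="rbf"),
        }

        # Estimate each model's cross-validated accuracy and report it, mirroring the
        # paper's model-selection step (the best-scoring model would be kept).
        for name, model in models.items():
            pipe = make_pipeline(StandardScaler(), model)
            acc = cross_val_score(pipe, X, y, cv=10, scoring="accuracy").mean()
            print(f"{name}: {acc:.3f}")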